95% of AI Projects Are Failing. The Problem Isn't the Technology
MIT's Project NANDA released its landmark study, The GenAI Divide: State of AI in Business 2025, and the headline number stopped boardrooms cold: despite $30–40 billion in enterprise AI investment, 95% of generative AI pilots are producing zero measurable impact on the P&L.
But the real insight isn't the failure rate. It's why they're failing — and what that reveals about leadership, not technology.
The Uncomfortable Finding
The MIT researchers found that the divide between the 5% succeeding and the 95% stalling has almost nothing to do with model quality, data infrastructure, or regulation. It comes down to the implementation approach. Companies are bolting AI onto existing workflows and expecting transformation. They're buying tools and skipping the harder work of redesigning how people actually operate.
As one CIO in the study put it: "We've seen dozens of demos this year. Maybe one or two are genuinely useful. The rest are wrappers or science projects."
The Questions Leaders Should Be Asking
Before investing another dollar in AI, I'd challenge any leadership team to sit with these:
Are we solving for a workflow or buying a tool? MIT found that buying AI from specialized vendors and building genuine partnerships succeeds roughly 67% of the time; internal builds succeed only a third as often. The difference isn't capability — it's that partnerships force you to define the problem before building the solution.
Who is actually driving adoption? The study found that success correlates with empowering line managers — not centralized AI labs — to lead implementation. The people closest to the work know where AI creates leverage. If your AI strategy lives exclusively in the C-suite or IT department, it's probably in the 95%.
Are we measuring the right things? UC Berkeley's response to the MIT study raised a provocative point: we may be experiencing a measurement failure, not an AI failure. If your only metric is six-month P&L impact, you'll miss efficiency gains, capability building, and competitive positioning that compound over time. But if you have no metrics at all, you're flying blind. The answer is better measurement, not the absence of it.
What's happening in the shadows? MIT documented widespread "shadow AI" — employees using unsanctioned tools like ChatGPT without organizational knowledge or governance. This tells you demand is real, but trust in the official strategy is low. That's a leadership signal, not a technology problem.
The Implication for Leaders
The AI bubble may well deflate — MIT Sloan's Davenport and Bean consider a correction likely, and soon. But the technology itself isn't going away. The organizations that will emerge strongest are the ones treating AI adoption as a change-management challenge, not a procurement exercise.
That means fewer pilots, more workflow redesign. Fewer demos, more honest measurement. And leadership that is willing to sit in the discomfort of not knowing exactly how this plays out — while still moving deliberately forward.
The 5% aren't smarter. They're more disciplined. And discipline, unlike technology, doesn't require a vendor.
Christopher Fitch is the founder of Marden Fitch, where he advises leadership teams on AI adoption, post-merger integration, and organizational change. His career spans IBM, Accenture, LEGO, and Oxford's Saïd Business School, where he developed 10,000+ global leaders.
Sources: MIT Project NANDA, "The GenAI Divide: State of AI in Business 2025"; MIT Sloan Management Review, "Five Trends in AI and Data Science for 2026" (Davenport & Bean); UC Berkeley Sutardja Center, "Beyond ROI: Are We Using the Wrong Metric in Measuring AI Success?"